
    Non Fourier Heat Transfer Across a Gun Barrel

    In this project, non-Fourier (hyperbolic) heat transfer across a gun barrel is investigated. First, a comparison is made among several research works on parabolic and hyperbolic heat equations in this field. The governing non-Fourier heat equations are then formulated and normalized, yielding a characteristic Peclet number that depends on the local sound speed and the thermal diffusion speed. A code was written in MATLAB for three different Peclet numbers, using the MacCormack predictor-corrector algorithm to generate the solutions and plot the graphs. As the Peclet number is varied, a transition is seen from the hyperbolic behavior to the parabolic curve.
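    As an illustration of the scheme described above, here is a minimal Python sketch (not the project's MATLAB code) of a MacCormack predictor-corrector step for the 1-D hyperbolic Maxwell-Cattaneo system T_t + q_x = 0, tau*q_t + q = -alpha*T_x; the grid, diffusivity alpha, and relaxation time tau are illustrative assumptions.

    import numpy as np

    # 1-D Maxwell-Cattaneo system:  T_t + q_x = 0,  tau*q_t + q = -alpha*T_x
    # MacCormack scheme: forward-difference predictor, backward-difference corrector.
    nx, nt = 200, 2000               # illustrative grid (assumption)
    L, alpha, tau = 1.0, 1e-2, 1e-3  # length, diffusivity, relaxation time (assumptions)
    dx = L / (nx - 1)
    c = np.sqrt(alpha / tau)         # finite thermal-wave speed of the hyperbolic model
    dt = 0.5 * dx / c                # CFL-limited time step

    T = np.zeros(nx); T[0] = 1.0     # hot left boundary, cold interior
    q = np.zeros(nx)                 # heat-flux-like variable

    for _ in range(nt):
        # predictor: forward differences
        Tp, qp = T.copy(), q.copy()
        Tp[1:-1] = T[1:-1] - dt/dx * (q[2:] - q[1:-1])
        qp[1:-1] = q[1:-1] - dt/dx * (alpha/tau) * (T[2:] - T[1:-1]) - dt/tau * q[1:-1]
        # corrector: backward differences on predicted values, averaged with old state
        T[1:-1] = 0.5 * (T[1:-1] + Tp[1:-1] - dt/dx * (qp[1:-1] - qp[:-2]))
        q[1:-1] = 0.5 * (q[1:-1] + qp[1:-1] - dt/dx * (alpha/tau) * (Tp[1:-1] - Tp[:-2])
                         - dt/tau * qp[1:-1])
        T[0], T[-1] = 1.0, 0.0       # Dirichlet boundaries

    # T now carries a sharp traveling thermal front, the non-Fourier signature;
    # in the diffusion-dominated limit the profile approaches the parabolic solution.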

    Optimal Gossip Algorithms for Exact and Approximate Quantile Computations

    This paper gives drastically faster gossip algorithms to compute exact and approximate quantiles. Gossip algorithms, which allow each node to contact a uniformly random other node in each round, have been intensely studied and adopted in many applications due to their fast convergence and their robustness to failures. Kempe et al. [FOCS'03] gave gossip algorithms to compute important aggregate statistics if every node is given a value. In particular, they gave a beautiful $O(\log n + \log \frac{1}{\epsilon})$ round algorithm to $\epsilon$-approximate the sum of all values and an $O(\log^2 n)$ round algorithm to compute the exact $\phi$-quantile, i.e., the $\lceil \phi n \rceil$ smallest value. We give a quadratically faster and in fact optimal gossip algorithm for the exact $\phi$-quantile problem which runs in $O(\log n)$ rounds. We furthermore show that one can achieve an exponential speedup if one allows for an $\epsilon$-approximation: we give an $O(\log \log n + \log \frac{1}{\epsilon})$ round gossip algorithm which computes a value of rank between $\phi n$ and $(\phi+\epsilon)n$ at every node, for any $0 \leq \phi \leq 1$ and $0 < \epsilon < 1$. Our algorithms are extremely simple and very robust: they can be operated with the same running times even if every transmission fails with a, potentially different, constant probability. We also give a matching $\Omega(\log \log n + \log \frac{1}{\epsilon})$ lower bound which shows that our algorithm is optimal for all values of $\epsilon$.
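    For context on the aggregate-statistics setting referenced above, here is a minimal Python sketch of Kempe et al.'s push-sum gossip for approximating the average (the earlier FOCS'03 primitive, not this paper's quantile algorithm); the network size and round count are illustrative assumptions.

    import random

    def push_sum(values, rounds):
        """Push-sum gossip (Kempe et al. [FOCS'03]): every round each node keeps
        half of its (sum, weight) pair and pushes the other half to a uniformly
        random node; the ratio s/w at every node converges to the global average."""
        n = len(values)
        s = list(values)                 # running sums
        w = [1.0] * n                    # running weights
        for _ in range(rounds):
            sends = []
            for i in range(n):
                target = random.randrange(n)         # uniformly random contact
                sends.append((target, s[i] / 2, w[i] / 2))
                s[i] /= 2; w[i] /= 2                 # keep the other half
            for t, ds, dw in sends:                  # synchronous delivery
                s[t] += ds; w[t] += dw
        return [si / wi for si, wi in zip(s, w)]     # per-node average estimates

    # toy run (sizes are illustrative assumptions)
    vals = [random.random() for _ in range(64)]
    print(sum(vals) / len(vals), push_sum(vals, rounds=20)[0])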

    SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data

    Recent success in fine-tuning large models, pretrained on broad data at scale, on downstream tasks has led to a significant paradigm shift in deep learning, from task-centric model design to task-agnostic representation learning followed by task-specific fine-tuning. As the representations of pretrained models are used as a foundation for different downstream tasks, this paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data. We set up a reference using the theoretically derived robustness-accuracy tradeoff of a class-conditional Gaussian mixture. Given a pretrained model, the representations of data synthesized from the Gaussian mixture are compared with this reference to infer the quality. By comparing the ratio of the area under the robustness-accuracy curve between the raw data and their representations, SynBench offers a quantifiable score for robustness-accuracy performance benchmarking. Our framework applies to a wide range of pretrained models taking continuous data inputs and is independent of the downstream tasks and datasets. Evaluated on several pretrained vision transformer models, the experimental results show that the SynBench score matches well with the actual linear probing performance of the pretrained model when fine-tuned on downstream tasks. Moreover, our framework can be used to inform the design of robust linear probing on pretrained representations to mitigate the robustness-accuracy tradeoff in downstream tasks.
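    A minimal sketch of the kind of measurement SynBench formalizes (not the paper's exact procedure): synthesize class-conditional Gaussian-mixture data, fit a linear probe, sweep an l2 perturbation radius to trace a robustness-accuracy curve, and take the area-under-curve ratio between raw inputs and their representations. The random-feature "encoder" and all sizes below are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_mixture(n, d, mu_scale=1.0):
        """Class-conditional Gaussian mixture: y in {-1,+1}, x ~ N(y*mu, I)."""
        mu = mu_scale * np.ones(d) / np.sqrt(d)
        y = rng.choice([-1.0, 1.0], size=n)
        x = y[:, None] * mu + rng.standard_normal((n, d))
        return x, y

    def robustness_accuracy_auc(x, y, eps_grid):
        """Area under the robust-accuracy-vs-epsilon curve for a linear probe.
        A point with margin m under weights w stays correctly classified for any
        l2 perturbation of size <= m / ||w||_2."""
        w = np.linalg.lstsq(x, y, rcond=None)[0]      # simple least-squares probe
        margin = y * (x @ w)
        radius = margin / np.linalg.norm(w)
        robust_acc = [(radius >= e).mean() for e in eps_grid]
        return np.trapz(robust_acc, eps_grid)

    x, y = gaussian_mixture(n=4000, d=64)
    z = np.tanh(x @ rng.standard_normal((64, 32)))    # stand-in "pretrained" encoder
    eps = np.linspace(0.0, 2.0, 50)
    score = robustness_accuracy_auc(z, y, eps) / robustness_accuracy_auc(x, y, eps)
    print(f"SynBench-style AUC ratio: {score:.3f}")   # closer to 1 = less quality lost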

    Generalizing Robustness Verification for Machine Learning

    Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task. Although much work has been done to quantify the robustness of DNNs to ℓₚ-norm bounded adversarial attacks, there are still gaps between the available guarantees and those needed in practice. In this thesis we focus on resolving two of these limitations. 1) While current verification methods mainly focus on the ℓₚ-norm threat model of the input instances, robustness verification against semantic adversarial attacks inducing large ℓₚ-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose a framework, Semantify-NN, to extend ℓₚ-norm verification to semantic verification. 2) Randomized smoothing is a recently proposed defense against adversarial attacks that has achieved state-of-the-art provable robustness against ℓ₂ perturbations. A number of publications have extended the guarantees to other metrics, such as ℓ₁ or ℓ∞, by using different smoothing measures. Although the current framework has been shown to yield near-optimal ℓₚ radii, the total safety region it certifies can be arbitrarily small compared to the optimal one. We provide Higher Order Verification, a general framework that improves the certified safety region of these smoothed classifiers without changing the underlying smoothing scheme, allowing the resulting classifier to be provably robust to multiple threat models at once.
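    For reference, a minimal sketch of the standard randomized-smoothing ℓ₂ certificate that point 2 builds on (the baseline Cohen et al.-style scheme, not the thesis's higher-order framework): estimate the smoothed classifier's top-class probability pA under Gaussian noise and certify the radius sigma * Phi^{-1}(pA). The toy base classifier, noise level, and confidence bound are assumptions.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def base_classifier(x):
        """Toy base classifier (assumption): sign of the first coordinate."""
        return int(x[0] > 0)

    def smoothed_certificate(x, sigma=0.5, n_samples=10_000, alpha=0.001):
        """Monte-Carlo l2 certificate for the Gaussian-smoothed classifier:
        if p_A > 1/2, the smoothed prediction is constant within radius
        sigma * Phi^{-1}(p_A)."""
        noisy = x + sigma * rng.standard_normal((n_samples, x.size))
        votes = np.array([base_classifier(z) for z in noisy])
        top = int(votes.mean() > 0.5)
        count = (votes == top).sum()
        # crude Hoeffding lower confidence bound on p_A
        # (a Clopper-Pearson bound is the standard choice)
        p_a = count / n_samples - np.sqrt(np.log(1 / alpha) / (2 * n_samples))
        if p_a <= 0.5:
            return top, 0.0                       # abstain from certifying
        return top, sigma * norm.ppf(p_a)         # certified l2 radius

    x = np.array([0.8, -0.2, 0.1])
    pred, radius = smoothed_certificate(x)
    print(f"prediction={pred}, certified l2 radius={radius:.3f}")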

    Towards verifying robustness of neural networks against a family of semantic perturbations

    Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task. While current verification methods mainly focus on the ℓₚ-norm threat model of the input instances, robustness verification against semantic adversarial attacks inducing large ℓₚ-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose Semantify-NN, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. By simply inserting our proposed semantic perturbation layers (SP-layers) before the input layer of any given model, Semantify-NN is model-agnostic, and any ℓₚ-norm based verification tool can be used to verify the model's robustness against semantic perturbations. We illustrate the principles of designing the SP-layers and provide examples including semantic perturbations to image classification in the spaces of hue, saturation, lightness, brightness, contrast, and rotation, respectively. In addition, an efficient refinement technique is proposed to further improve the semantic certificate significantly. Experiments on various network architectures and different datasets demonstrate the superior verification performance of Semantify-NN over ℓₚ-norm-based verification frameworks that naively convert semantic perturbations to ℓₚ-norm bounds. The results show that Semantify-NN can support robustness verification against a wide range of semantic perturbations.
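    A minimal sketch of the SP-layer idea for a single semantic dimension (an illustrative brightness shift; the stand-in classifier and ranges are assumptions): the semantic parameter delta becomes the model's new input, so bounding |delta| with any ℓₚ verifier certifies the semantic perturbation. Here the 1-D range is checked by dense sampling where a real verifier would bound it symbolically.

    import numpy as np

    def brightness_sp_layer(delta, x0):
        """SP-layer for a brightness shift: maps the 1-D semantic parameter delta
        to a full perturbed image. Prepending this to a classifier f yields
        g(delta) = f(sp(delta)), so an l-infinity bound |delta| <= eps on the new
        scalar input covers the whole semantic attack."""
        return np.clip(x0 + delta, 0.0, 1.0)

    def classifier(x):
        """Stand-in classifier (assumption): two linear logits."""
        w = np.array([[0.9, -0.4, 0.2], [-0.3, 0.8, 0.1]])
        return w @ x

    x0 = np.array([0.2, 0.6, 0.5])                    # toy "image"
    g = lambda d: classifier(brightness_sp_layer(d, x0))

    # naive certified check by exhaustive sampling of the 1-D semantic range;
    # a real verifier would bound g over |delta| <= eps symbolically instead
    eps = 0.1
    labels = {int(np.argmax(g(d))) for d in np.linspace(-eps, eps, 201)}
    print("label constant on range" if len(labels) == 1 else "possibly not robust")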

    Materials Engineering with Swift Heavy Ions
